Supports up to 6 EdgeAI inference devices simultaneously (GPU + 2x Myriad X + 3x Edge TPU) | DNNs can run inference in parallel without performance loss (see the sketch below the table) | |
Each Edge TPU can run MobileNetV2 at 400 fps | Significant performance boost for real applications (MobileNetV2 is one of the most popular backbones) | |
Latest Intel x86 CPU (Elkhart Lake) with 16 GB of RAM | Can run computationally heavy algorithms for navigation and sensor-data analysis | |
Low power consumption (less than 24 W) | Longer operation on battery power; only a simplified heatsink/heat-spreader design is required | |
Rich set of I/O interfaces (8x USB / 6x UART / I2C / 1-Wire / DP) | Easy integration into any robot | |
Compact size (30 x 55 x 84 mm (1.18” x 2.17” x 3.31”), incl. heat-spreaders) | Easy integration into any robot | |
Wide input power range (7 V to 35 V) | Can be connected directly to a 2S to 8S battery pack | |
Industrial temperature range support (*depends on the EdgeAI devices used) | Can be used in industrial environments | |
EdgeAI devices are easily upgradable | Support for the upcoming Keem Bay VPU and other EdgeAI devices for even better performance | |
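
To illustrate the parallel-inference claim in the first row, below is a minimal Python sketch of how several Edge TPUs might be driven concurrently with the PyCoral runtime. The model path, dummy frame source, and one-thread-per-device layout are illustrative assumptions only and are not part of the board's official software stack.

```python
import threading

import numpy as np
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import list_edge_tpus, make_interpreter

# Placeholder path to an Edge TPU-compiled MobileNetV2 classifier (assumption).
MODEL = 'mobilenet_v2_1.0_224_quant_edgetpu.tflite'


def run_on_tpu(tpu_index, frames, results):
    """Run all frames through one interpreter pinned to a specific Edge TPU."""
    # ':N' selects the N-th enumerated Edge TPU (USB or PCIe).
    interpreter = make_interpreter(MODEL, device=f':{tpu_index}')
    interpreter.allocate_tensors()
    for frame in frames:
        common.set_input(interpreter, frame)
        interpreter.invoke()
        results[tpu_index].append(classify.get_classes(interpreter, top_k=1))


def main():
    num_tpus = len(list_edge_tpus())
    # Dummy 224x224 RGB frames; a real robot would feed camera images here.
    frames = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
              for _ in range(100)]
    results = {i: [] for i in range(num_tpus)}
    threads = [threading.Thread(target=run_on_tpu, args=(i, frames, results))
               for i in range(num_tpus)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Each Edge TPU processed its own copy of the frame list independently.
    print({i: len(r) for i, r in results.items()})


if __name__ == '__main__':
    main()
```

Each thread owns its own interpreter bound to a distinct Edge TPU, so the devices work independently; a similar pattern could dispatch other models to the GPU or Myriad X devices through their respective runtimes.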